Analyzing multi-way measurements is a challenge across various fields, including data mining, neuroscience, and chemometrics. For example, the measurements may evolve over time or have incongruent time profiles. The PARAFAC2 model has successfully been used to analyze such data by allowing the underlying factor matrices in one mode (i.e., the evolving mode) to change across slices. The traditional approach to fitting a PARAFAC2 model is to use an alternating least-squares-based algorithm, which handles the constant cross-product constraint of the PARAFAC2 model by implicitly estimating the evolving factor matrices. This approach makes imposing regularization on these factor matrices challenging. Currently, no algorithm can flexibly impose such regularization with general penalty functions and hard constraints. To address this challenge and avoid the implicit estimation, we propose in this paper an algorithm for fitting PARAFAC2 based on alternating optimization with the alternating direction method of multipliers (AO-ADMM). Through numerical experiments on simulated data, we show that the proposed PARAFAC2 AO-ADMM approach allows for flexible constraints, recovers the underlying patterns accurately, and is computationally efficient compared with the state of the art. We also apply the model to two real-world datasets from neuroscience and chemometrics, and show that constraining the evolving mode improves the interpretability of the extracted patterns.
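The heart of an AO-ADMM scheme is an ADMM inner loop that alternates a least-squares step, a proximal step enforcing the constraint, and a dual update. As a minimal illustration of that machinery only (not the paper's PARAFAC2 algorithm), the sketch below solves a nonnegativity-constrained least-squares subproblem; `admm_nnls` and its parameters are hypothetical names chosen for this example.

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, n_iter=200):
    """Toy ADMM for min ||Ax - b||^2 subject to x >= 0 (illustrative sketch)."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # pre-build the x-update system
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # least-squares step
        z = np.maximum(0.0, x + u)                     # proximal step: project onto x >= 0
        u = u + x - z                                  # dual update
    return z  # the feasible (nonnegative) iterate

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
b = A @ x_true
x_hat = admm_nnls(A, b)
```

The same split-variable pattern generalizes to other penalties by swapping the projection for the corresponding proximal operator.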
translated by Google Translate
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with them. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies for evaluating and improving model performance are reviewed.
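The proposed PGT approximation hinges on quantifying inter- and intra-rater reliability. One standard chance-corrected agreement measure for two raters is Cohen's kappa; the sketch below (with the hypothetical helper `cohens_kappa`, not code from the paper) shows how such a reliability score could be computed.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' categorical labels (hypothetical helper)."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)  # observed agreement
    # chance agreement: product of each rater's marginal label frequencies
    pe = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (po - pe) / (1 - pe)

rater1 = [1, 0, 1, 1, 0, 1, 0, 0]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1]
kappa = cohens_kappa(rater1, rater2)  # 6/8 observed agreement, 0.5 by chance
```

Low kappa between (or within) raters would, under the PGT view, cap how much reference-similarity gains can translate into RWMP.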
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
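K-fold cross-validation on the training set, which only 37% of participants performed, can be sketched as follows; `kfold_indices` is a hypothetical helper written for this example, not a reference to any participant's code.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Split shuffled sample indices into k disjoint folds (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

folds = kfold_indices(10, 5)
# Each fold serves once as the validation set; the remaining folds form the training set.
splits = []
for i, val in enumerate(folds):
    train = np.concatenate([f for j, f in enumerate(folds) if j != i])
    splits.append((train, val))
```

Averaging a metric over the k validation folds gives a less optimistic performance estimate than a single train/validation split.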
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
In edge computing, users' service profiles must be migrated in response to user mobility. Reinforcement learning (RL) frameworks have been proposed to do so. However, these frameworks do not consider occasional server failures, which, although rare, prevent the smooth and safe functioning of latency-sensitive edge computing applications such as autonomous driving and real-time obstacle detection, because users' computing jobs can no longer be completed. As these failures occur at a low rate, it is intrinsically difficult for data-driven RL algorithms to learn an optimal service migration solution for both typical and rare event scenarios. We therefore introduce FIRE, a rare-event adaptive resilience framework that integrates importance sampling into reinforcement learning to place backup services. We sample rare events at a rate proportional to their contribution to the value function in order to learn an optimal policy. Our framework balances the trade-off between the cost of service migration and backup placement against the cost of failures. We propose an importance-sampling-based Q-learning algorithm and prove its boundedness and convergence to optimality. Subsequently, we propose novel eligibility-trace, linear function approximation, and deep Q-learning versions of our algorithm to ensure that it scales to real-world scenarios. We extend our framework to accommodate users with different risk tolerances toward failure. Finally, we show with trace-driven experiments that our algorithm reduces costs when failures occur.
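The idea of combining importance sampling with Q-learning can be illustrated, under simplifying assumptions, as a tabular update reweighted by the ratio of a transition's true probability to its (inflated) sampling probability. This is a generic sketch of the technique, not the FIRE algorithm itself; `q_update` and its arguments are hypothetical names.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9, w=1.0):
    """One tabular Q-learning step with an importance weight w.

    w corrects for drawing rare (failure) transitions more often than they
    truly occur: w = p_true / p_sampling for the sampled transition.
    (Generic sketch; not the paper's exact algorithm.)
    """
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * w * (td_target - Q[s, a])
    return Q

Q = np.zeros((2, 2))
# Rare failure transition: true probability 0.01, oversampled with
# probability 0.5 by the simulator, so w = 0.01 / 0.5 = 0.02.
Q = q_update(Q, s=0, a=1, r=-10.0, s_next=1, w=0.01 / 0.5)
```

Without the weight, oversampled failures would bias the value estimates toward an overly pessimistic policy.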
This report presents the automatic evaluation of the general machine translation task of the Seventh Conference on Machine Translation (WMT22). It evaluates a total of 185 systems across 21 translation directions, covering high-resource to low-resource language pairs as well as closely related to distant languages. This large-scale automatic evaluation highlights some of the current limitations of state-of-the-art machine translation systems. It also shows how the automatic metrics, namely chrF, BLEU, and COMET, complement each other in terms of interpretability and accuracy to mitigate their own limitations.
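As a rough illustration of how a character-level metric such as chrF complements word-level BLEU, the sketch below implements a simplified character n-gram F-score. This is a toy approximation written for this example, not the official sacreBLEU chrF implementation.

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-gram counts, ignoring spaces (toy helper)."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=3, beta=2.0):
    """Simplified chrF: average char n-gram F-beta, scaled to 0-100."""
    scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(h.values()) == 0 or sum(r.values()) == 0:
            continue
        overlap = sum((h & r).values())          # clipped n-gram matches
        p, rec = overlap / sum(h.values()), overlap / sum(r.values())
        if p + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * p * rec / (beta**2 * p + rec))
    return 100 * sum(scores) / len(scores) if scores else 0.0

score_same = chrf("the cat sat", "the cat sat")
score_diff = chrf("the cat sat", "a dog ran")
```

Because it matches character n-grams, chrF gives partial credit for morphological variants that word-level BLEU scores as complete misses.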
As a contribution to the analysis of metaphor, we present a statistically grounded, data-based study, providing an empirical analysis of long-standing conjectures and a first empirical exploration of the systematic characteristics of metaphor. Conversely, this also makes metaphor theory available as a foundation for the emergence of meaning, one that can be explored quantitatively and integrated into NLP frameworks.
This paper focuses on overlapped speech and gender detection in order to study interactions between women and men in French audiovisual media (the Gender Equality Monitoring project). In this application context, we need to automatically segment the speech signal according to the speaker's gender and to identify when at least two speakers are talking at the same time. We propose to use the WavLM model, which has the advantage of being pre-trained on a large amount of speech data, to build overlapped speech detection (OSD) and gender detection (GD) systems. In this study, we use two different corpora. The DIHARD III corpus is well suited to the OSD task but lacks gender information. The ALLIES corpus matches the project's application context. Our best OSD system is a temporal convolutional network (TCN) with WavLM pre-trained features as input, which reaches state-of-the-art F1-score performance on DIHARD. The neural GD system, trained with WavLM inputs on a gender-balanced subset of the French broadcast news ALLIES data, achieves 97.9% accuracy. This work opens new perspectives for human science researchers regarding the differences in the representation of women and men in French media.
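The TCN used for OSD is built from causal dilated 1-D convolutions, which widen the receptive field without losing temporal causality. A minimal sketch of that core operation (a hypothetical helper, not the paper's architecture):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation=1):
    """Causal dilated 1-D convolution, the building block of a TCN (toy sketch)."""
    k = len(kernel)
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for i in range(k):
            j = t - i * dilation  # only look at past samples (causal)
            if j >= 0:
                y[t] += kernel[i] * x[j]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# Difference filter over samples two steps apart (dilation = 2).
y = causal_dilated_conv(x, kernel=[1.0, -1.0], dilation=2)
```

Stacking such layers with exponentially increasing dilations (1, 2, 4, ...) lets a TCN cover long contexts with few parameters.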
IceCube is a cubic-kilometer array of optical sensors, deployed between 1.45 km and 2.45 km below the surface of the ice sheet at the South Pole, designed to detect atmospheric and astrophysical neutrinos with energies between 1 GeV and 1 PeV. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. The GNNs are capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range with the state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN reduces the FPR by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%-20% compared with the current maximum-likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate nearly matching the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
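Representing events as point-cloud graphs lets a GNN aggregate information across neighboring sensor hits. The sketch below shows one generic mean-aggregation message-passing step; it is a simplified stand-in for the idea, not IceCube's actual network, and `message_passing` is a hypothetical name.

```python
import numpy as np

def message_passing(features, adjacency, W):
    """One mean-aggregation message-passing step (illustrative sketch).

    features:  (n_nodes, d) per-node features, e.g. sensor hits as graph nodes.
    adjacency: (n_nodes, n_nodes) 0/1 matrix of graph edges.
    W:         (d, d_out) learnable weight matrix.
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                       # avoid division by zero for isolated nodes
    agg = adjacency @ features / deg          # mean of neighbour features
    return np.maximum(0.0, (features + agg) @ W)  # combine with self-features + ReLU

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
W = np.eye(2)
out = message_passing(feats, adj, W)
```

Stacking several such layers and pooling over nodes yields a fixed-size representation suitable for classification or regression heads.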
Several chronic lung diseases, such as idiopathic pulmonary fibrosis (IPF), are characterized by abnormal dilatation of the airways. Quantification of airway features on computed tomography (CT) can help characterize disease progression. Physics-based airway measurement algorithms have been developed, but have had limited success due to the diversity of airway morphology seen in clinical practice. Supervised learning methods are also not feasible due to the high cost of obtaining precise airway annotations. We propose synthesizing airways by style transfer using perceptual losses to train our model, the Airway Transfer Network (ATN). We compare the ATN model with a state-of-the-art GAN network (simGAN) using a) qualitative assessment and b) assessment of the ability of ATN- and simGAN-based CT airway metrics to predict mortality in a cohort of 113 IPF patients. ATN was shown to be quicker and easier to train than simGAN. ATN-based airway measurements were also found to be consistently more robust than simGAN-derived airway metrics on IPF CTs. Refining synthetic data with a transfer network using perceptual losses is a realistic alternative to GAN-based methods for clinical CT analysis of idiopathic pulmonary fibrosis. Our source code can be found at https://github.com/ashkanpakzad/atn and is compatible with the existing open-source airway analysis framework, AirQuant.
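A perceptual loss compares images in the feature space of a (typically pretrained) network rather than in raw pixel space. The sketch below illustrates the idea with toy feature extractors; `perceptual_loss` and the toy layers are hypothetical stand-ins for, e.g., pretrained VGG feature maps, not the paper's implementation.

```python
import numpy as np

def perceptual_loss(img_a, img_b, feature_maps):
    """Sum of MSEs between feature activations of two images (toy sketch).

    feature_maps: list of callables standing in for layers of a pretrained
    network (hypothetical stand-ins for e.g. VGG layers).
    """
    loss = 0.0
    for f in feature_maps:
        fa, fb = f(img_a), f(img_b)
        loss += np.mean((fa - fb) ** 2)  # compare in feature space, not pixel space
    return loss

# Toy "feature extractors": identity and a horizontal-gradient filter.
layers = [lambda img: img, lambda img: np.diff(img, axis=1)]
a = np.ones((4, 4))
b = np.zeros((4, 4))
loss_ab = perceptual_loss(a, b, layers)
loss_aa = perceptual_loss(a, a, layers)
```

Matching features rather than pixels encourages the refined synthetic airways to share higher-level structure with real CT patches instead of merely matching intensities.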